Debussy Exploration Server

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Aligning music audio with symbolic scores using a hybrid graphical model

Internal identifier: 000477 (Main/Exploration); previous: 000476; next: 000478


Authors: Christopher Raphael [United States]

Source:

RBID: ISTEX:698850BDEB0B2344798C55B050A2C90ACF55CFEA

French descriptors: Musique

English descriptors: Graphical models; Music; Score following; Score matching

Abstract

We present a new method for establishing an alignment between a polyphonic musical score and a corresponding sampled audio performance. The method uses a graphical model containing both latent discrete variables, corresponding to score position, as well as a latent continuous tempo process. We use a simple data model based only on the pitch content of the audio signal. The data interpretation is defined to be the most likely configuration of the hidden variables, given the data, and we develop computational methodology to identify or approximate this configuration using a variant of dynamic programming involving parametrically represented continuous variables. Experiments are presented on a 55-minute hand-marked orchestral test set.
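
The abstract describes a dynamic-programming search for the most likely sequence of hidden score positions given the audio. As a rough illustration only, and not the paper's method (the continuous tempo process and the pitch-content data model are replaced here by a hypothetical single detected pitch per frame and a match/no-match emission), the following Python sketch aligns a stream of pitches to a list of score events with a plain Viterbi recursion:

# Toy score-to-audio alignment by dynamic programming (Viterbi).
# Hypothetical simplification: one detected MIDI pitch per audio frame,
# a match/no-match emission model, and no tempo variable.
import math


def align(frames, score, p_advance=0.3, p_match=0.9):
    """Return the most likely score-event index for each audio frame.

    frames : list of detected pitches (one MIDI number per frame)
    score  : list of pitch sets, one per score event, in score order
    """
    n_states = len(score)
    NEG_INF = float("-inf")

    def emit(state, pitch):
        # Crude pitch-content model: likely if the pitch belongs to the event.
        return math.log(p_match if pitch in score[state] else 1.0 - p_match)

    # Initialisation: the performance starts at the first score event.
    delta = [NEG_INF] * n_states
    delta[0] = emit(0, frames[0])
    backpointers = []

    for pitch in frames[1:]:
        new_delta = [NEG_INF] * n_states
        pointers = [0] * n_states
        for s in range(n_states):
            stay = delta[s] + math.log(1.0 - p_advance)
            move = delta[s - 1] + math.log(p_advance) if s > 0 else NEG_INF
            best, pointers[s] = (stay, s) if stay >= move else (move, s - 1)
            new_delta[s] = best + emit(s, pitch)
        delta = new_delta
        backpointers.append(pointers)

    # Backtrack from the best final state.
    path = [max(range(n_states), key=delta.__getitem__)]
    for pointers in reversed(backpointers):
        path.append(pointers[path[-1]])
    return list(reversed(path))


if __name__ == "__main__":
    score = [{60}, {62}, {64, 67}, {65}]        # toy score events (MIDI pitches)
    frames = [60, 60, 62, 64, 64, 67, 65, 65]   # toy per-frame pitch detections
    print(align(frames, score))                 # [0, 0, 1, 2, 2, 2, 3, 3]

The paper additionally carries a parametrically represented continuous tempo variable through the dynamic programming; this discrete-only sketch omits that entirely.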

Url:
DOI: 10.1007/s10994-006-8415-3
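
The DOI above and the ISTEX full-text URL recorded in the XML below can be used to retrieve the article programmatically. A minimal sketch, assuming Python with the requests library; the ISTEX endpoint may require institutional authentication, and the output filename is hypothetical:

import requests

# Full-text URL taken from the record below; access may require ISTEX credentials.
ISTEX_PDF_URL = (
    "https://api.istex.fr/document/"
    "698850BDEB0B2344798C55B050A2C90ACF55CFEA/fulltext/pdf"
)


def fetch_pdf(url: str, out_path: str = "raphael-2006-alignment.pdf") -> str:
    """Download the PDF at `url` and write it to `out_path`."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()  # fail loudly on 401/403/404
    with open(out_path, "wb") as fh:
        fh.write(response.content)
    return out_path


if __name__ == "__main__":
    print(fetch_pdf(ISTEX_PDF_URL))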


Affiliations:


Links toward previous steps (curation, corpus...)


The document in XML format

<record>
<TEI wicri:istexFullTextTei="biblStruct">
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Aligning music audio with symbolic scores using a hybrid graphical model</title>
<author>
<name sortKey="Raphael, Christopher" sort="Raphael, Christopher" uniqKey="Raphael C" first="Christopher" last="Raphael">Christopher Raphael</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">ISTEX</idno>
<idno type="RBID">ISTEX:698850BDEB0B2344798C55B050A2C90ACF55CFEA</idno>
<date when="2006" year="2006">2006</date>
<idno type="doi">10.1007/s10994-006-8415-3</idno>
<idno type="url">https://api.istex.fr/document/698850BDEB0B2344798C55B050A2C90ACF55CFEA/fulltext/pdf</idno>
<idno type="wicri:Area/Istex/Corpus">000604</idno>
<idno type="wicri:explorRef" wicri:stream="Istex" wicri:step="Corpus" wicri:corpus="ISTEX">000604</idno>
<idno type="wicri:Area/Istex/Curation">000604</idno>
<idno type="wicri:Area/Istex/Checkpoint">000421</idno>
<idno type="wicri:explorRef" wicri:stream="Istex" wicri:step="Checkpoint">000421</idno>
<idno type="wicri:doubleKey">0885-6125:2006:Raphael C:aligning:music:audio</idno>
<idno type="wicri:Area/Main/Merge">000476</idno>
<idno type="wicri:Area/Main/Curation">000477</idno>
<idno type="wicri:Area/Main/Exploration">000477</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title level="a" type="main" xml:lang="en">Aligning music audio with symbolic scores using a hybrid graphical model</title>
<author>
<name sortKey="Raphael, Christopher" sort="Raphael, Christopher" uniqKey="Raphael C" first="Christopher" last="Raphael">Christopher Raphael</name>
<affiliation></affiliation>
<affiliation wicri:level="1">
<country wicri:rule="url">États-Unis</country>
</affiliation>
</author>
</analytic>
<monogr></monogr>
<series>
<title level="j">Machine Learning</title>
<title level="j" type="abbrev">Mach Learn</title>
<idno type="ISSN">0885-6125</idno>
<idno type="eISSN">1573-0565</idno>
<imprint>
<publisher>Kluwer Academic Publishers</publisher>
<pubPlace>Boston</pubPlace>
<date type="published" when="2006-12-01">2006-12-01</date>
<biblScope unit="volume">65</biblScope>
<biblScope unit="issue">2-3</biblScope>
<biblScope unit="page" from="389">389</biblScope>
<biblScope unit="page" to="409">409</biblScope>
</imprint>
<idno type="ISSN">0885-6125</idno>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt>
<idno type="ISSN">0885-6125</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Graphical models</term>
<term>Music</term>
<term>Score following</term>
<term>Score matching</term>
</keywords>
<keywords scheme="Wicri" type="topic" xml:lang="fr">
<term>Musique</term>
</keywords>
</textClass>
<langUsage>
<language ident="en">en</language>
</langUsage>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Abstract: We present a new method for establishing an alignment between a polyphonic musical score and a corresponding sampled audio performance. The method uses a graphical model containing both latent discrete variables, corresponding to score position, as well as a latent continuous tempo process. We use a simple data model based only on the pitch content of the audio signal. The data interpretation is defined to be the most likely configuration of the hidden variables, given the data, and we develop computational methodology to identify or approximate this configuration using a variant of dynamic programming involving parametrically represented continuous variables. Experiments are presented on a 55-minute hand-marked orchestral test set.</div>
</front>
</TEI>
<affiliations>
<list>
<country>
<li>États-Unis</li>
</country>
</list>
<tree>
<country name="États-Unis">
<noRegion>
<name sortKey="Raphael, Christopher" sort="Raphael, Christopher" uniqKey="Raphael C" first="Christopher" last="Raphael">Christopher Raphael</name>
</noRegion>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Musique/explor/DebussyV1/Data/Main/Exploration
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000477 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd -nk 000477 | SxmlIndent | more

To link to this page within the Wicri network

{{Explor lien
   |wiki=    Wicri/Musique
   |area=    DebussyV1
   |flux=    Main
   |étape=   Exploration
   |type=    RBID
   |clé=     ISTEX:698850BDEB0B2344798C55B050A2C90ACF55CFEA
   |texte=   Aligning music audio with symbolic scores using a hybrid graphical model
}}

Wicri

This area was generated with Dilib version V0.6.33.
Data generation: Tue Sep 25 16:34:07 2018. Site generation: Mon Mar 11 10:31:28 2024